Neuromorphic Intelligence

van Gerven, Marcel

arXiv.org Artificial Intelligence

Neuromorphic computing seeks to replicate the remarkable efficiency, flexibility, and adaptability of the human brain in artificial systems. Unlike conventional digital approaches, which suffer from the Von Neumann bottleneck and depend on massive computational and energy resources, neuromorphic systems exploit brain-inspired principles of computation to achieve orders of magnitude greater energy efficiency. By drawing on insights from a wide range of disciplines -- including artificial intelligence, physics, chemistry, biology, neuroscience, cognitive science and materials science -- neuromorphic computing promises to deliver intelligent systems that are sustainable, transparent, and widely accessible. A central challenge, however, is to identify a unifying theoretical framework capable of bridging these diverse disciplines. We argue that dynamical systems theory provides such a foundation. Rooted in differential calculus, it offers a principled language for modeling inference, learning, and control in both natural and artificial substrates. Within this framework, noise can be harnessed as a resource for learning, while differential genetic programming enables the discovery of dynamical systems that implement adaptive behaviors. Embracing this perspective paves the way toward emergent neuromorphic intelligence, where intelligent behavior arises from the dynamics of physical substrates, advancing both the science and sustainability of AI.
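As a toy illustration of noise harnessed as a resource for learning within a dynamical system, the sketch below adapts a leaky integrator's weight by keeping only random perturbations that reduce the error. This is a hypothetical demo, not the paper's framework; the integrator, the perturbation scale, and the acceptance rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def settle(w, u=1.0, dt=0.01, steps=2000):
    """Euler-integrate the leaky integrator dx/dt = -x + w*u to its fixed point."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-x + w * u)
    return x  # fixed point is w*u

# Noise-driven learning: propose a random weight change, keep it only
# if the settled output moves closer to the target.
target, w = 0.7, 0.0
for _ in range(200):
    w_try = w + 0.05 * rng.standard_normal()
    if abs(settle(w_try) - target) < abs(settle(w) - target):
        w = w_try
print(w)  # converges near the target 0.7
```

The noise here is not a nuisance to be filtered out: it is the only search mechanism, which is the sense in which such schemes treat noise as a learning resource.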


DarwinWafer: A Wafer-Scale Neuromorphic Chip

Zhu, Xiaolei, Jin, Xiaofei, Kang, Ziyang, Sun, Chonghui, Feng, Junjie, Hu, Dingwen, Wang, Zengyi, Zhuang, Hanyue, Zheng, Qian, Tang, Huajin, Gu, Shi, Du, Xin, Ma, De, Pan, Gang

arXiv.org Artificial Intelligence

Neuromorphic computing promises brain-like efficiency, yet today's multi-chip systems scale over PCBs and incur orders-of-magnitude penalties in bandwidth, latency, and energy, undermining biological algorithms and system efficiency. We present DarwinWafer, a hyperscale system-on-wafer that replaces off-chip interconnects with wafer-scale, high-density integration of 64 Darwin3 chiplets on a 300 mm silicon interposer. A GALS NoC within each chiplet and an AER-based asynchronous wafer fabric with hierarchical time-step synchronization provide low-latency, coherent operation across the wafer. Each chiplet implements 2.35 M neurons and 0.1 B synapses, yielding 0.15 B neurons and 6.4 B synapses per wafer. At 333 MHz and 0.8 V, DarwinWafer consumes ~100 W and achieves 4.9 pJ/SOP, with 64 TSOPS peak throughput (0.64 TSOPS/W). Realization is enabled by a holistic chiplet-interposer co-design flow (including an in-house interposer-bump planner with early SI/PI and electro-thermal closure) and a warpage-tolerant assembly that fans out I/O via PCBlets and compliant pogo-pin connections, enabling robust, demountable wafer-to-board integration. Measurements confirm 10 mV supply droop and a uniform thermal profile (34-36 °C) under ~100 W. Application studies demonstrate whole-brain simulations: two zebrafish brains per chiplet with high connectivity fidelity (Spearman r = 0.896) and a mouse brain mapped across 32 chiplets (r = 0.645). To our knowledge, DarwinWafer represents a pioneering demonstration of wafer-scale neuromorphic computing, establishing a viable and scalable path toward large-scale, brain-like computation on silicon by replacing PCB-level interconnects with high-density, on-wafer integration.
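The per-wafer totals quoted in the abstract follow directly from the per-chiplet figures; a quick arithmetic check (illustrative only):

```python
# Per-wafer totals derived from the per-chiplet figures in the abstract.
chiplets = 64
neurons = chiplets * 2.35e6        # 2.35 M neurons per chiplet
synapses = chiplets * 0.1e9        # 0.1 B synapses per chiplet
peak_tsops = 64e12                 # peak throughput, SOP/s
power_w = 100.0                    # approximate wafer power

print(neurons / 1e9)               # ~0.15 B neurons
print(synapses / 1e9)              # 6.4 B synapses
print(peak_tsops / power_w / 1e12) # 0.64 TSOPS/W
```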


FeNN: A RISC-V vector processor for Spiking Neural Network acceleration

Aizaz, Zainab, Knight, James C., Nowotny, Thomas

arXiv.org Artificial Intelligence

Spiking Neural Networks (SNNs) have the potential to drastically reduce the energy requirements of AI systems. However, mainstream accelerators like GPUs and TPUs are designed for the high arithmetic intensity of standard ANNs and so are not well-suited to SNN simulation. FPGAs are well-suited to applications with low arithmetic intensity as they have high off-chip memory bandwidth and large amounts of on-chip memory. Here, we present a novel RISC-V-based soft vector processor (FeNN), tailored to simulating SNNs on FPGAs. Unlike most dedicated neuromorphic hardware, FeNN is fully programmable and designed to be integrated with applications running on standard computers from the edge to the cloud. We demonstrate that, by using stochastic rounding and saturation, FeNN can achieve high numerical precision with low hardware utilisation and that a single FeNN core can simulate an SNN classifier faster than both an embedded GPU and the Loihi neuromorphic system.
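Stochastic rounding with saturation, the numerical technique the abstract credits for FeNN's precision, can be sketched in a few lines. This is an illustrative software model, not FeNN's hardware implementation; the function name and word widths are hypothetical.

```python
import numpy as np

def stochastic_round_fixed(x, frac_bits=8, int_bits=7, rng=None):
    """Quantize to signed fixed point with stochastic rounding and saturation.

    Values are scaled by 2**frac_bits; the fractional remainder becomes the
    probability of rounding up, making the quantization unbiased in
    expectation. Results saturate at the representable range instead of
    wrapping around.
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(x, dtype=np.float64) * (1 << frac_bits)
    floor = np.floor(scaled)
    rounded = floor + (rng.random(scaled.shape) < (scaled - floor))
    # Saturate to the signed range of an (int_bits + frac_bits + 1)-bit word.
    limit = 1 << (int_bits + frac_bits)
    return np.clip(rounded, -limit, limit - 1) / (1 << frac_bits)

# Unbiasedness check: averaging many stochastic roundings recovers the input.
vals = stochastic_round_fixed(np.full(100_000, 0.3), frac_bits=2)
print(vals.mean())  # close to 0.3, even though 0.3 is not representable
```

The design intuition is that round-to-nearest introduces a systematic bias that accumulates over long SNN simulations, whereas stochastic rounding makes the quantization error zero-mean.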


Voltage Mode Winner-Take-All Circuit for Neuromorphic Systems

Zyarah, Abdullah M., Kudithipudi, Dhireesha

arXiv.org Artificial Intelligence

Recent advances in neuromorphic computing demonstrate on-device learning capabilities with low power consumption. One of the key learning units in these systems is the winner-take-all circuit. In this research, we propose a winner-take-all circuit that can be configured to achieve k-winner and hysteresis properties, simulated in an IBM 65 nm node. The circuit dissipates 34.9 µW of power with a latency of 10.4 ns while processing 1000 inputs. The utility of the circuit is demonstrated for spatial filtering and classification.
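A behavioral software model of the k-winner and hysteresis properties might look as follows. This is an illustrative sketch, not the proposed analog circuit; the function name and its `hysteresis` bias parameter are hypothetical.

```python
def k_wta(inputs, k=1, prev_winners=None, hysteresis=0.0):
    """Behavioral model of a k-winner-take-all stage.

    Returns the indices of the k largest inputs. Current winners receive a
    small bias (`hysteresis`), so they keep winning until a challenger
    exceeds them by that margin, mimicking the hysteresis property of the
    analog circuit.
    """
    prev = set(prev_winners or [])
    biased = [v + (hysteresis if i in prev else 0.0) for i, v in enumerate(inputs)]
    return sorted(range(len(inputs)), key=lambda i: biased[i], reverse=True)[:k]

winners = k_wta([0.2, 0.9, 0.4, 0.8], k=2)
print(sorted(winners))  # [1, 3]
```

The hysteresis bias is what prevents rapid winner flipping when two inputs are nearly equal, which is the usual motivation for adding it to WTA stages.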


Neuromorphic Programming: Emerging Directions for Brain-Inspired Hardware

Abreu, Steven, Pedersen, Jens E.

arXiv.org Artificial Intelligence

The value of brain-inspired neuromorphic computers critically depends on our ability to program them for relevant tasks. Currently, neuromorphic hardware often relies on machine learning methods adapted from deep learning. However, neuromorphic computers have potential far beyond deep learning if only we can harness their energy efficiency and full computational power. Neuromorphic programming will necessarily be different from conventional programming, requiring a paradigm shift in how we think about programming. This paper presents a conceptual analysis of programming within the context of neuromorphic computing, challenging conventional paradigms and proposing a framework that aligns more closely with the physical intricacies of these systems. Our analysis revolves around five characteristics that are fundamental to neuromorphic programming and provides a basis for comparison to contemporary programming methods and languages. By studying past approaches, we contribute a framework that advocates for underutilized techniques and calls for richer abstractions to effectively instrument the new hardware class.


DelGrad: Exact gradients in spiking networks for learning transmission delays and weights

Göltz, Julian, Weber, Jimmy, Kriener, Laura, Lake, Peter, Payvand, Melika, Petrovici, Mihai A.

arXiv.org Artificial Intelligence

Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Transmission delays play an important role in shaping these temporal characteristics. Recent work has demonstrated the substantial advantages of learning these delays along with synaptic weights, both in terms of accuracy and memory efficiency. However, these approaches suffer from drawbacks in terms of precision and efficiency, as they operate in discrete time and with approximate gradients, while also requiring membrane potential recordings for calculating parameter updates. To alleviate these issues, we propose an analytical approach for calculating exact loss gradients with respect to both synaptic weights and delays in an event-based fashion. The inclusion of delays emerges naturally within our proposed formalism, enriching the model's search space with a temporal dimension. Our algorithm is purely based on the timing of individual spikes and does not require access to other variables such as membrane potentials. We explicitly compare the impact on accuracy and parameter efficiency of different types of delays - axonal, dendritic and synaptic. Furthermore, while previous work on learnable delays in SNNs has been mostly confined to software simulations, we demonstrate the functionality and benefits of our approach on the BrainScaleS-2 neuromorphic platform.
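As a loose illustration of the core idea that delays enter the computation purely through shifted spike times, consider the following toy spike-time model. This is not the paper's neuron model or exact formalism; the soft-minimum form and all parameter names are assumptions made for the sketch.

```python
import numpy as np

def spike_time(t_in, w, d, beta=5.0):
    """Toy model: the output spike time is a weight-modulated soft-minimum
    of the delayed input spike times t_in + d. Everything the neuron 'sees'
    is a spike time, so gradients w.r.t. both w and d exist in closed form."""
    t = t_in + d                   # each input is shifted by its delay
    a = w * np.exp(-beta * t)      # earlier / stronger inputs dominate
    return float(np.sum(a * t) / np.sum(a))

def delay_gradient(t_in, w, d, i, eps=1e-6):
    """Central finite difference w.r.t. delay d[i], to probe the toy model
    (the paper instead derives such gradients analytically, event-based)."""
    dp, dm = d.copy(), d.copy()
    dp[i] += eps
    dm[i] -= eps
    return (spike_time(t_in, w, dp) - spike_time(t_in, w, dm)) / (2 * eps)

t_in = np.array([1.0, 1.5, 2.0])
w = np.array([0.5, 1.0, 0.8])
d = np.array([0.2, 0.0, 0.1])
print(spike_time(t_in, w, d), delay_gradient(t_in, w, d, 0))
```

One property worth noting: shifting every delay by a constant shifts the output spike time by exactly that constant, reflecting that the model depends only on spike timing.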


Advancing Neuromorphic Computing: Mixed-Signal Design Techniques Leveraging Brain Code Units and Fundamental Code Units

Isik, Murat, Miziev, Sols, Pawlak, Wiktoria, Howard, Newton

arXiv.org Artificial Intelligence

This paper introduces a groundbreaking digital neuromorphic architecture that innovatively integrates Brain Code Unit (BCU) and Fundamental Code Unit (FCU) using mixed-signal design methodologies. Leveraging open-source datasets and the latest advances in materials science, our research focuses on enhancing the computational efficiency, accuracy, and adaptability of neuromorphic systems. The core of our approach lies in harmonizing the precision and scalability of digital systems with the robustness and energy efficiency of analog processing. Through experimentation, we demonstrate the effectiveness of our system across various metrics. The BCU achieved an accuracy of 88.0% and a power efficiency of 20.0 GOP/s/W, while the FCU recorded an accuracy of 86.5% and a power efficiency of 18.5 GOP/s/W. Our mixed-signal design approach significantly improved latency and throughput, achieving a latency as low as 0.75 ms and throughput up to 213 TOP/s. These results firmly establish the potential of our architecture in neuromorphic computing, providing a solid foundation for future developments in this domain. Our study underscores the feasibility of mixed-signal neuromorphic systems and their promise in advancing the field, particularly in applications requiring high efficiency and adaptability.


NEUROSEC: FPGA-Based Neuromorphic Audio Security

Isik, Murat, Vishwamith, Hiruna, Sur, Yusuf, Inadagbo, Kayode, Dikmen, I. Can

arXiv.org Artificial Intelligence

Neuromorphic systems, inspired by the complexity and functionality of the human brain, have gained academic and industrial attention due to their unparalleled potential across a wide range of applications. While their capabilities herald innovation, it is imperative to underscore that these computational paradigms, analogous to their traditional counterparts, are not impervious to security threats. Although neuromorphic methodologies for image and video processing have been rigorously explored, neuromorphic audio processing remains in its early stages. Our results highlight the robustness and precision of our FPGA-based neuromorphic system. Specifically, our system showcases a commendable balance between desired signal and background noise, efficient spike rate encoding, and strong resilience against adversarial attacks such as FGSM and PGD. A standout feature of our framework is its detection rate of 94%, which, compared to other methodologies, underscores its superior capability in identifying and mitigating threats at a commendable signal-to-noise ratio of 5.39 dB. Furthermore, neuromorphic computing and hardware security serve many sensor domains in mission-critical and privacy-preserving applications.
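FGSM, one of the attacks evaluated above, perturbs each input element by a small step in the direction of the sign of the loss gradient. A minimal sketch, assuming a generic differentiable model supplies the input gradient; the linear toy "model" below is hypothetical:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.1):
    """Fast Gradient Sign Method: nudge every input element by epsilon in
    the direction that increases the loss. grad_wrt_x is the gradient of
    the loss w.r.t. the input, obtained from any differentiable model."""
    return x + epsilon * np.sign(grad_wrt_x)

# Toy example: for a linear "loss" L = w . x, the input gradient is w.
w = np.array([0.5, -2.0, 0.0, 3.0])
x = np.array([1.0, 1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, grad_wrt_x=w, epsilon=0.1)
print(x_adv)  # [1.1 0.9 1.  1.1]
```

PGD, the other attack named in the abstract, iterates this step several times and projects the result back into an epsilon-ball around the original input after each step.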


Polariton lattices as binarized neuromorphic networks

Sedov, Evgeny, Kavokin, Alexey

arXiv.org Artificial Intelligence

We introduce a novel neuromorphic network architecture based on a lattice of exciton-polariton condensates, intricately interconnected and energized through non-resonant optical pumping. The network employs a binary framework, where each neuron, facilitated by the spatial coherence of pairwise coupled condensates, performs binary operations. This coherence, emerging from the ballistic propagation of polaritons, ensures efficient, network-wide communication. The binary neuron switching mechanism, driven by the nonlinear repulsion through the excitonic component of polaritons, offers computational efficiency and scalability advantages over continuous weight neural networks. Our network enables parallel processing, enhancing computational speed compared to sequential or pulse-coded binary systems. The system's performance was evaluated using the MNIST dataset for handwritten digit recognition, showcasing the potential to outperform existing polaritonic neuromorphic systems, as demonstrated by its impressive predicted classification accuracy of up to 97.5%.
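The binary switching described above can be modeled, at the most abstract level, as a thresholded binary neuron. This sketch is purely illustrative and says nothing about the polaritonic physics; the function name and thresholds are assumptions.

```python
import numpy as np

def binary_neuron(inputs, weights, threshold):
    # Binary neuron: switches on (1) when the summed coupling drive from
    # its partners exceeds a switching threshold, otherwise stays off (0).
    return int(np.dot(inputs, weights) > threshold)

# With binary couplings, the threshold alone selects the logic function:
print(binary_neuron([1, 1], [1, 1], threshold=1.5))  # 1 (acts as AND)
print(binary_neuron([0, 1], [1, 1], threshold=1.5))  # 0
print(binary_neuron([0, 1], [1, 1], threshold=0.5))  # 1 (acts as OR)
```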


Analysis on Effects of Fault Elements in Memristive Neuromorphic Systems

Lee, Hyun-Jong, Lim, Jae-Han

arXiv.org Artificial Intelligence

Neuromorphic systems based on Spiking Neural Networks (SNNs) are attracting the attention of many researchers, and numerous studies aimed at improving their performance have shown satisfactory results. To take these gains further, building actual neuromorphic hardware is essential, and memristors play a key role here due to their useful device characteristics. Although memristors are essential for physical neuromorphic systems, they are vulnerable to faults; however, few studies have analyzed the effects of faulty elements in memristive neuromorphic systems. To address this gap, we analyze the performance of a memristive neuromorphic system with faulty elements while varying the fault ratio, type, and position. We inject faults into both neurons and synapses; for synapses, we inject two fault types, SA0 (stuck-at-0) and SA1 (stuck-at-1), at either random or important positions. Our analysis reveals four interesting findings. First, memristive characteristics increase the vulnerability of neuromorphic systems to faulty elements. Second, there exist fault-neuron ratios at which performance drops sharply. Third, the performance degradation caused by faulty synapses depends on the fault type. Finally, SA1 faulty synapses improve performance when they appear at important positions.
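A minimal software model of the SA0/SA1 fault-injection procedure described above (illustrative only; the weight range and function name are assumptions, not the paper's setup):

```python
import numpy as np

def inject_stuck_at(weights, fault_ratio, fault_type, w_min=0.0, w_max=1.0, rng=None):
    """Inject stuck-at faults into a synaptic weight matrix.

    SA0: faulty memristors are stuck at the lowest conductance (w_min).
    SA1: faulty memristors are stuck at the highest conductance (w_max).
    A `fault_ratio` fraction of randomly chosen positions is affected.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = weights.copy()
    mask = rng.random(w.shape) < fault_ratio
    w[mask] = w_min if fault_type == "SA0" else w_max
    return w, mask

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(4, 4))
w_faulty, mask = inject_stuck_at(w, fault_ratio=0.25, fault_type="SA1", rng=rng)
print(mask.sum(), "synapses stuck at w_max")
```

Targeting "important positions" instead of random ones, as in the paper's second setting, would amount to replacing the random mask with one concentrated on the largest-magnitude or most-used weights.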